Rectifier (neural networks)
In the context of artificial neural networks, the rectifier is an activation function defined as

:f(x) = \max(0, x)

where ''x'' is the input to a neuron. This is also known as a ramp function. This activation function has been argued to be more biologically plausible than the widely used logistic sigmoid (which is inspired by probability theory; see logistic regression) and its more practical counterpart, the hyperbolic tangent. The rectifier is, at the time of writing, the most popular activation function for deep neural networks. A unit employing the rectifier is also called a rectified linear unit (ReLU).

A smooth approximation to the rectifier is the analytic function

:f(x) = \ln(1 + e^x)

which is called the softplus function.〔C. Dugas, Y. Bengio, F. Bélisle, C. Nadeau, R. Garcia, "Incorporating Second-Order Functional Knowledge for Better Option Pricing", NIPS 2000 (2001).〕 The derivative of softplus is

:f'(x) = \frac{e^x}{e^x + 1} = \frac{1}{1 + e^{-x}}

i.e. the logistic function. Rectified linear units find applications in computer vision using deep neural nets.
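To make these definitions concrete, the following is a minimal NumPy sketch, not part of the original article, of the rectifier, its softplus approximation, and the logistic derivative of softplus; the function names relu, softplus and softplus_grad are chosen here purely for illustration.

<syntaxhighlight lang="python">
import numpy as np

def relu(x):
    # Rectifier f(x) = max(0, x), applied element-wise.
    return np.maximum(0.0, x)

def softplus(x):
    # Smooth approximation f(x) = ln(1 + e^x), computed stably as log(e^0 + e^x).
    return np.logaddexp(0.0, x)

def softplus_grad(x):
    # Derivative of softplus: the logistic function 1 / (1 + e^-x).
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-3.0, 3.0, 7)
print(relu(x))           # zero for negative inputs, identity for positive inputs
print(softplus(x))       # smooth and strictly positive, approaching relu(x) for large |x|
print(softplus_grad(x))  # values in (0, 1), the logistic sigmoid
</syntaxhighlight>

For large positive ''x'', softplus(''x'') ≈ ''x'' and its derivative approaches 1, matching the rectifier; for large negative ''x'', both tend to 0.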
== Variants ==